33 research outputs found

    Computational complexity and memory usage for multi-frontal direct solvers in structured mesh finite elements

    The multi-frontal direct solver is the state-of-the-art algorithm for the direct solution of sparse linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm to linear systems resulting from B-spline-based isogeometric finite elements, where the mesh is a structured grid. Specifically, we provide the estimates for systems resulting from C^{p-1} polynomial B-spline spaces and compare them to those obtained using C^0 spaces. Comment: 8 pages, 2 figures
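
    A quick way to see why the two spaces behave differently for a direct solver is to count degrees of freedom per direction: a C^{p-1} B-spline space of degree p on n elements (open knot vector) has n + p basis functions, while a C^0 space of order p has n*p + 1. The short Python sketch below is illustrative only and is not taken from the paper; the grid size n and dimension d are made-up values used to compare the resulting system sizes on a structured grid.

        # Illustrative sketch (not from the paper): compare the number of degrees
        # of freedom of C^{p-1} and C^0 spaces on a structured grid with
        # n elements per direction in d spatial dimensions.
        def dofs_smooth(n, p, d):
            # C^{p-1} B-splines of degree p on an open knot vector: n + p per direction
            return (n + p) ** d

        def dofs_c0(n, p, d):
            # C^0 (Lagrange-type) basis of order p: n*p + 1 per direction
            return (n * p + 1) ** d

        if __name__ == "__main__":
            n, d = 64, 2  # assumed grid size and dimension, for illustration only
            for p in (2, 3, 4):
                print(f"p={p}: C^(p-1) dofs = {dofs_smooth(n, p, d):7d}, "
                      f"C^0 dofs = {dofs_c0(n, p, d):7d}")

    For the same mesh, the C^{p-1} space has far fewer unknowns but denser coupling between them; the paper quantifies how this trade-off plays out in the multi-frontal solver's complexity and memory estimates.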

    Linear computational cost implicit solver for parabolic problems

    In this paper, we use the alternating direction method for isogeometric finite elements to simulate implicit dynamics. Namely, we focus on a parabolic problem and use B-spline basis functions in space and an implicit marching method to fully discretize the problem. We introduce intermediate time steps and separate our differential operator into a summation of blocks, each acting along a particular coordinate axis in the intermediate time steps. We show that the resulting stiffness matrix can be represented as a product of two (in 2D) or three (in 3D) multi-diagonal matrices, each associated with B-spline basis functions along a particular axis of the spatial coordinate system. As a result of these algebraic transformations, we obtain a system of linear equations that can be factorized at linear O(N) computational cost in every time step of the implicit method. We use our method to simulate a heat transfer problem. We demonstrate theoretically and verify numerically that our implicit method is unconditionally stable for heat transfer (i.e., parabolic) problems. We conclude our presentation with a discussion of the limitations of the method.
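
    The linear cost hinges on a standard algebraic identity: a system whose matrix is a Kronecker product of banded one-dimensional matrices can be solved by sweeping direction by direction with banded 1D factorizations. The SciPy sketch below illustrates that idea in 2D; the tridiagonal matrix standing in for a 1D B-spline mass matrix, the grid sizes, and the variable names are placeholders, not the paper's actual formulation.

        # Illustrative sketch: solve (Mx ⊗ My) vec(U) = vec(F) in O(N) by two
        # direction-wise sweeps of banded 1D solves.
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import splu

        def banded_mass_1d(n):
            # Placeholder tridiagonal Gram matrix standing in for a 1D B-spline
            # mass matrix; only the banded structure matters here.
            return diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / 6.0

        nx, ny = 200, 300
        lu_x = splu(banded_mass_1d(nx))   # banded LU, linear cost for fixed bandwidth
        lu_y = splu(banded_mass_1d(ny))

        F = np.random.rand(ny, nx)        # right-hand side on the tensor-product grid

        # (Mx ⊗ My) vec(U) = vec(F) is equivalent to My @ U @ Mx.T = F,
        # so solve along the y-direction first, then along x (Mx, My symmetric).
        tmp = lu_y.solve(F)               # y-direction solves, one per column
        U = lu_x.solve(tmp.T).T           # x-direction solves, total cost O(nx*ny)

    Each sweep touches every unknown a constant number of times, which is where the O(N) cost per time step comes from; the paper carries this structure through the implicit time-marching scheme in 2D and 3D.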

    Automatic stabilization of finite-element simulations using neural networks and hierarchical matrices

    Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports, inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in a basis expansion) associated with that PDE parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems. Comment: 28 pages, 16 figures, 4 tables, 6 algorithms
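
    The cheap online matrix-vector products are easiest to see with an ordinary (non-hierarchical) low-rank factor pair: if the operator acts like a well-conditioned part plus U V^T of rank r, each GMRES iteration costs O(n*r) instead of O(n^2). The toy SciPy sketch below illustrates only this cost argument; the identity-plus-low-rank operator, the sizes, and the rank are invented for the example and are not the paper's compressed Petrov-Galerkin matrix.

        # Toy sketch: GMRES with an inexpensive matvec coming from a low-rank
        # factorization; stands in for a compressed system matrix.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n, r = 2000, 20                      # assumed size and rank, for illustration
        rng = np.random.default_rng(0)
        U_f = rng.standard_normal((n, r))
        V_f = rng.standard_normal((n, r))
        scale = 0.1 / n                      # small perturbation keeps the toy system well-posed

        def matvec(x):
            # identity + low-rank perturbation: O(n*r) work instead of O(n^2)
            return x + scale * (U_f @ (V_f.T @ x))

        A = LinearOperator((n, n), matvec=matvec)
        b = rng.standard_normal(n)

        x, info = gmres(A, b, atol=1e-10)
        print("converged" if info == 0 else f"gmres info = {info}",
              "residual =", np.linalg.norm(matvec(x) - b))

    Hierarchical (H-matrix) compression keeps such low-rank blocks at several scales, which is the structure the paper exploits online and whose construction the neural network is trained to accelerate.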